We propose FiberNet, a method to estimate in-vivo the cardiac fiber architecture of the human atria from multiple catheter recordings of the electrical activation. Cardiac fibers play a central role in the electrical function of the heart, yet they are difficult to determine in-vivo and are therefore rarely patient-specific in existing cardiac models. FiberNet learns the fiber arrangement by solving an inverse problem with physics-informed neural networks. The inverse problem amounts to identifying the conduction velocity tensor of a cardiac propagation model from a set of sparse activation maps. The use of multiple maps enables the simultaneous identification of all components of the conduction velocity tensor, including the local fiber angle. We extensively test FiberNet on synthetic 2-D and 3-D examples, diffusion-tensor fibers, and patient-specific cases. We show that a small number of maps is sufficient to accurately capture the fibers, even in the presence of noise. With fewer maps, the role of regularization becomes prominent. Moreover, we show that the fitted model can robustly reproduce unseen activation maps. We envision that FiberNet will help the creation of patient-specific models for personalized medicine. The full code is available at http://github.com/fsahli/fibernet.
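To make the inverse problem concrete, here is a minimal, illustrative physics-informed fit, not the authors' FiberNet code: a small network predicts an activation-time field and a local fiber angle, and an anisotropic eikonal residual couples the two. The velocities, network size, and synthetic data are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): jointly fit an activation-time field T(x)
# and a local fiber angle theta(x) on 2-D points so that the anisotropic eikonal
# residual  grad(T)^T D(theta) grad(T) - 1  vanishes. Velocities and data are toy choices.
import torch

torch.manual_seed(0)

class FiberPINN(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 2),  # outputs: activation time T, fiber angle theta
        )

    def forward(self, x):
        out = self.net(x)
        return out[:, :1], out[:, 1:2]

def conduction_tensor(theta, v_f=0.6, v_s=0.3):
    # D = v_f^2 f f^T + v_s^2 s s^T, with f = (cos theta, sin theta) and s orthogonal to f
    c, s = torch.cos(theta), torch.sin(theta)
    f = torch.cat([c, s], dim=1)
    n = torch.cat([-s, c], dim=1)
    return v_f**2 * f.unsqueeze(2) * f.unsqueeze(1) + v_s**2 * n.unsqueeze(2) * n.unsqueeze(1)

model = FiberPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic "measurements": activation times at sparse catheter-like points
x_data = torch.rand(64, 2)
t_data = x_data[:, :1] / 0.6               # a plane wave along x as toy ground truth
x_pde = torch.rand(256, 2, requires_grad=True)

for step in range(1000):
    opt.zero_grad()
    t_pred, _ = model(x_data)
    data_loss = torch.mean((t_pred - t_data) ** 2)

    t_pde, theta = model(x_pde)
    grad_t = torch.autograd.grad(t_pde.sum(), x_pde, create_graph=True)[0]
    D = conduction_tensor(theta)
    residual = torch.einsum("bi,bij,bj->b", grad_t, D, grad_t) - 1.0
    pde_loss = torch.mean(residual ** 2)

    loss = data_loss + 0.1 * pde_loss      # the weighting is an arbitrary choice here
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```

In this toy setup the data term pins the activation times while the eikonal residual forces the predicted fiber angle to be consistent with the observed wave propagation, which is the essence of the identification described above.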
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
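As a rough illustration of the upcycling step (a sketch under assumed shapes and expert count, not the paper's T5/ViT implementation), the following copies a pretrained dense MLP block into every expert of a Mixture-of-Experts layer and adds a freshly initialized router:

```python
# Minimal sketch: "upcycle" a dense MLP block into a Mixture-of-Experts layer by
# copying the dense weights into every expert and adding a new router.
import copy
import torch

class DenseMLP(torch.nn.Module):
    def __init__(self, d_model=256, d_ff=1024):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_model, d_ff)
        self.fc2 = torch.nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

class UpcycledMoE(torch.nn.Module):
    def __init__(self, dense_mlp, num_experts=8):
        super().__init__()
        # Every expert starts as an exact copy of the pretrained dense block,
        # so the sunk pretraining cost is reused rather than discarded.
        self.experts = torch.nn.ModuleList(
            [copy.deepcopy(dense_mlp) for _ in range(num_experts)]
        )
        d_model = dense_mlp.fc1.in_features
        self.router = torch.nn.Linear(d_model, num_experts)

    def forward(self, x, top_k=1):
        # x: (tokens, d_model). Route each token to its top-k experts.
        logits = self.router(x)
        weights, indices = torch.topk(torch.softmax(logits, dim=-1), top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask][:, k:k + 1] * expert(x[mask])
        return out

dense = DenseMLP()                       # stands in for a pretrained dense checkpoint
moe = UpcycledMoE(dense, num_experts=8)
tokens = torch.randn(16, 256)
print(moe(tokens).shape)                 # torch.Size([16, 256])
```

Because each expert is initialized from the same dense weights, the upcycled layer initially behaves close to the dense one, and sparse training then specializes the experts.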
When dealing with real-world optimization problems, decision-makers often face high uncertainty associated with partial information, unknown parameters, or complex relationships between these and the problem decision variables. In this work, we develop a novel Chance-Constrained Learning (CCL) methodology, with a focus on mixed-integer linear optimization problems, which combines ideas from the chance-constrained and constraint-learning literatures. Chance constraints set a probabilistic confidence level for a single constraint or a set of constraints to be fulfilled, whereas the constraint-learning methodology aims to model the functional relationships between the problem variables through predictive models. One of the main issues arises when we need to set further bounds on the response variables: fulfilling these bounds is directly related to the accuracy of the predictive model and its probabilistic behavior. In this sense, CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables, providing a data-driven solution for chance constraints. An open-access software package has been developed for use by practitioners. Furthermore, the benefits of CCL have been tested in two real-world case studies, demonstrating how robustness is added to optimal solutions when probabilistic bounds are set for learned constraints.
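A minimal sketch of the quantile-based surrogate idea follows; the data, feature names, and bound are illustrative assumptions, not the authors' open-access package:

```python
# Sketch: use a linear conditional-quantile model as a data-driven surrogate for a
# chance constraint. If y <= b must hold with probability ~0.95, bound the estimated
# 0.95-quantile of y given x instead of y itself.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))                           # decision variables (toy data)
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.2, size=500)     # learned response variable

# A linear quantile regression stays linear in X, so the fitted quantile
# q_0.95(y | x) = coef @ x + intercept can be embedded in a MILP as a constraint.
q95 = QuantileRegressor(quantile=0.95, alpha=0.0).fit(X, y)
print("coef:", q95.coef_, "intercept:", q95.intercept_)

# Chance-constraint surrogate: require  coef @ x + intercept <= b  inside the MILP.
b = 2.5
x_candidate = np.array([0.6, 0.4])
print("surrogate satisfied:", q95.predict(x_candidate[None])[0] <= b)
```

The design choice here mirrors the abstract: because the quantile model is linear, the probabilistic bound becomes an ordinary linear constraint on the decision variables.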
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of a foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what their emergent properties make them capable of. To tackle these questions, we believe that much of the critical research on foundation models will require efforts commensurate with their fundamentally sociotechnical nature.
This paper presents Kimera-Multi, the first multi-robot system that (i) is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures resulting from perceptual aliasing, (ii) is fully distributed and relies only on local (peer-to-peer) communication to achieve distributed localization and mapping, and (iii) builds a globally consistent metric-semantic 3D mesh model of the environment in real time, where the faces of the mesh are annotated with semantic labels. Kimera-Multi is implemented by a team of robots equipped with visual-inertial sensors. Each robot builds a local trajectory estimate and a local mesh using Kimera. When communication is available, the robots initiate a distributed place recognition and robust pose graph optimization protocol based on a novel distributed graduated non-convexity algorithm. The proposed protocol allows the robots to improve their local trajectory estimates by leveraging inter-robot loop closures while being robust to outliers. Finally, each robot uses its improved trajectory estimate to correct its local mesh using mesh deformation techniques. We demonstrate Kimera-Multi in photo-realistic simulations, on SLAM benchmarking datasets, and on challenging outdoor datasets collected using ground robots. Both the real and simulated experiments involve long trajectories (e.g., up to 800 meters per robot). The experiments show that Kimera-Multi (i) outperforms the state of the art in terms of robustness and accuracy, (ii) achieves estimation errors comparable to a centralized SLAM system while being fully distributed, (iii) is parsimonious in terms of communication bandwidth, (iv) produces accurate metric-semantic 3D meshes, and (v) is modular and can also be used for standard 3D reconstruction (i.e., without semantic labels) or for trajectory estimation (i.e., without reconstructing the 3D mesh).
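To illustrate the robust-estimation idea behind the graduated non-convexity step, here is a toy sketch on a line-fitting problem; the actual algorithm operates on distributed SE(3) pose graphs, so this is only a conceptual analogue with made-up data:

```python
# Toy sketch of graduated non-convexity (GNC) robust least squares: start from a
# nearly convex surrogate cost and gradually recover the non-convex robust loss,
# down-weighting outliers (e.g., wrong loop closures) along the way.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 100)
y[:15] += rng.uniform(20, 40, 15)             # gross outliers

A = np.column_stack([x, np.ones_like(x)])
w = np.ones_like(y)                            # per-measurement weights
c2 = 1.0                                       # inlier threshold (squared residual)
mu = 1e4                                       # large mu -> nearly convex surrogate

for _ in range(50):
    # Weighted least squares with the current weights
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    r2 = (y - A @ theta) ** 2
    # Closed-form weight update for the Geman-McClure surrogate
    w = (mu * c2 / (r2 + mu * c2)) ** 2
    mu = max(mu / 1.4, 1.0)                    # gradually recover the non-convex loss

print("estimated slope/intercept:", theta)     # close to (3, 1) despite the outliers
print("outliers down-weighted:", (w[:15] < 0.5).all())
```

The same alternation between a weighted solve and a weight update is what lets the distributed protocol reject spurious inter-robot loop closures without a good initial guess.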
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
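As a small illustration of the quantitative evaluation described above (the captions below are invented for the example; this is not the paper's pipeline), a generated GUI description can be scored against human references with a machine translation metric such as BLEU:

```python
# Sketch: score a model-generated GUI description against human reference
# descriptions using corpus BLEU from NLTK.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [
    [["screen", "for", "logging", "into", "the", "app"],
     ["login", "screen", "with", "username", "and", "password", "fields"]],
]
hypotheses = [
    ["login", "screen", "with", "two", "text", "fields"],
]

smooth = SmoothingFunction().method1   # avoids zero scores on short captions
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```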
User equipment is one of the main bottlenecks facing the gaming industry nowadays. The extremely realistic games currently available impose high computational requirements on the devices that run them. As a consequence, the game industry has proposed the concept of Cloud Gaming, a paradigm that improves the gaming experience on devices with limited hardware. To this end, games are hosted on remote servers, relegating users' devices to the role of a peripheral for interacting with the game. However, this paradigm overloads the communication links connecting the users with the cloud, so the service experience becomes highly dependent on network connectivity. To overcome this, Cloud Gaming will be boosted by the promised performance of 5G and future 6G networks, together with the flexibility provided by mobility in multi-RAT scenarios, such as WiFi. In this context, the present work proposes a framework for measuring and estimating the main end-to-end (E2E) metrics of the Cloud Gaming service, namely Key Quality Indicators (KQIs). In addition, different machine learning techniques are assessed for predicting KQIs related to the Cloud Gaming user's experience. To this end, the main KQIs of the service, such as input lag, freeze percentage, or perceived video frame rate, are collected in a real environment. Based on these, the results show that machine learning techniques provide a good estimation of these indicators solely from network-based metrics. This is a valuable asset for guiding the delivery of Cloud Gaming services through cellular communication networks even without access to the user's device, as is expected for telecom operators.
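A minimal sketch of the estimation task follows; the feature names, data, and model choice are assumptions for illustration, not the measured dataset or the techniques compared in the paper:

```python
# Sketch: predict a Cloud Gaming KQI (e.g., freeze percentage) from network-level
# metrics only, using a standard regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical network metrics: RTT (ms), jitter (ms), packet loss (%), throughput (Mbps)
X = np.column_stack([
    rng.uniform(5, 120, n),
    rng.uniform(0, 30, n),
    rng.uniform(0, 5, n),
    rng.uniform(5, 100, n),
])
# Toy relation standing in for the measured freeze percentage
y = 0.05 * X[:, 0] + 0.2 * X[:, 1] + 2.0 * X[:, 2] - 0.02 * X[:, 3] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE on held-out data:", mean_absolute_error(y_te, model.predict(X_te)))
```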
Visual representations can be defined as the activations of neuronal populations in response to images. The activation of a neuron as a function over all image space has been described as a "tuning landscape". As a function over a high-dimensional space, what is the structure of this landscape? In this study, we characterize tuning landscapes through the lens of level sets and Morse theory. A recent study measured the in vivo two-dimensional tuning maps of neurons in different brain regions. Here, we developed a statistically reliable signature for these maps based on the change of topology in level sets. We found this topological signature changed progressively throughout the cortical hierarchy, with similar trends found for units in convolutional neural networks (CNNs). Further, we analyzed the geometry of level sets on the tuning landscapes of CNN units. We advanced the hypothesis that higher-order units can be locally regarded as isotropic radial basis functions, but not globally. This shows the power of level sets as a conceptual tool to understand neuronal activations over image space.
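A simplified sketch of the kind of level-set signature described above (the synthetic tuning map and thresholds are assumptions, not the recorded maps): count the connected components of superlevel sets as the threshold sweeps the activation range.

```python
# Sketch: track how many connected components the superlevel set {x : f(x) >= t}
# of a 2-D "tuning map" has as the threshold t increases.
import numpy as np
from scipy import ndimage

xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
# Toy tuning landscape: two bumps of different heights
f = np.exp(-((xx - 1) ** 2 + yy ** 2)) + 0.6 * np.exp(-((xx + 1.5) ** 2 + (yy - 1) ** 2))

for t in np.linspace(f.min(), f.max(), 8, endpoint=False):
    superlevel = f >= t
    _, n_components = ndimage.label(superlevel)
    print(f"threshold {t:.2f}: {n_components} component(s)")
```

How the component count changes with the threshold records topological events of the landscape (in the spirit of Morse theory), which is the kind of signature the study compares across brain regions and CNN layers.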
With the ever-growing model size and the limited availability of labeled training data, transfer learning has become an increasingly popular approach in many science and engineering domains. For classification problems, this work delves into the mystery of transfer learning through an intriguing phenomenon termed neural collapse (NC), where the last-layer features and classifiers of learned deep networks satisfy: (i) the within-class variability of the features collapses to zero, and (ii) the between-class feature means are maximally and equally separated. Through the lens of NC, our findings for transfer learning are the following: (i) when pre-training models, preventing intra-class variability collapse (to a certain extent) better preserves the intrinsic structures of the input data, so that it leads to better model transferability; (ii) when fine-tuning models on downstream tasks, obtaining features with more NC on downstream data results in better test accuracy on the given task. The above results not only demystify many widely used heuristics in model pre-training (e.g., data augmentation, projection head, self-supervised learning), but also lead to a more efficient and principled fine-tuning method on downstream tasks that we demonstrate through extensive experimental results.
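One common way to quantify within-class variability collapse is a within/between scatter ratio; the sketch below uses random stand-in features and is not necessarily the paper's exact metric:

```python
# Sketch: measure within-class variability of last-layer features relative to
# between-class separation; smaller values indicate stronger neural collapse.
import numpy as np

def within_between_ratio(features, labels):
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        sw += ((fc - mu_c) ** 2).sum()                      # within-class scatter
        sb += len(fc) * ((mu_c - global_mean) ** 2).sum()   # between-class scatter
    return sw / sb

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=2000)
class_means = rng.normal(0, 3, size=(10, 128))
features = class_means[labels] + rng.normal(0, 0.1, size=(2000, 128))  # stand-in features
print("within/between ratio:", within_between_ratio(features, labels))
```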
Recently, there has been an interest in improving the resources available to Intrusion Detection System (IDS) techniques. In this sense, several studies related to cybersecurity show that network intrusions and information hijacking are increasingly recurrent and complex. The criticality of business operations that rely on computing resources does not permit the information involved to remain vulnerable. Cybersecurity has become an indispensable dimension of corporate technology, and security teams deal daily with preventing the risk of intrusions into the environment. Thus, the main objective of this study was to investigate an Ensemble Learning technique based on the Stacking method, supported by the Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) algorithms, in order to improve DDoS attack detection. To this end, the Intrusion Detection System concept was applied using the Orange Data Mining and Machine Learning tool to obtain better results.
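A minimal sketch of the described ensemble follows; the synthetic data and the logistic-regression meta-learner are assumptions (the study itself used the Orange tool rather than this code):

```python
# Sketch: a Stacking ensemble with SVM and kNN base learners for a binary
# attack/benign classification task, on synthetic data standing in for IDS traffic.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```

The stacking meta-learner combines the base learners' predictions, which is the optimization of results over individual SVM or kNN classifiers that the study targets.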